

Search for: All records

Creators/Authors contains: "Mislove, Alan"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Detailed targeting of advertisements has long been one of the core offerings of online platforms. Unfortunately, malicious advertisers have frequently abused such targeting features, with results that range from violating civil rights laws to driving division, polarization, and even social unrest. Platforms have often attempted to mitigate this behavior by removing targeting attributes deemed problematic, such as inferred political leaning, religion, or ethnicity. In this work, we examine the effectiveness of these mitigations by collecting data from political ads placed on Facebook in the lead-up to the 2022 U.S. midterm elections. We show that major political advertisers circumvented these mitigations by targeting proxy attributes: seemingly innocuous targeting criteria that closely correspond to political and racial divides in American society. We introduce novel methods for directly measuring the skew of various targeting criteria to quantify their effectiveness as proxies, and then examine the scale at which those attributes are used. Our findings have crucial implications for the ongoing discussion on the regulation of political advertising and underscore the urgent need for increased transparency.
    Free, publicly accessible full text available November 8, 2025
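    The abstract above does not detail its skew metric; as a purely hypothetical illustration, one simple way to quantify how strongly a targeting attribute proxies for a political divide is to compare the partisan composition of the users reachable through that attribute against a platform-wide baseline. The function and input format below are assumptions for this sketch, not the paper's methodology.

    ```python
    from collections import Counter

    def partisan_skew(audience_parties, baseline_parties):
        """Difference in two-party Democratic share (in percentage points)
        between an audience reachable via one targeting attribute and a
        platform-wide baseline sample of matched voter records."""
        def dem_share(parties):
            counts = Counter(parties)
            two_party = counts["D"] + counts["R"]
            return 100.0 * counts["D"] / two_party if two_party else 0.0
        return dem_share(audience_parties) - dem_share(baseline_parties)

    # Hypothetical example: an ostensibly apolitical interest whose matched
    # audience leans far more Democratic than the platform-wide baseline.
    audience = ["D"] * 80 + ["R"] * 20   # users targetable via the attribute
    baseline = ["D"] * 52 + ["R"] * 48   # platform-wide comparison sample
    print(f"skew: {partisan_skew(audience, baseline):+.1f} points")  # +28.0
    ```

    Under this toy metric, an attribute with near-zero skew would be a poor proxy, while large positive or negative values would flag candidates for the kind of circumvention the paper documents.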
  2. Bluetooth-based item trackers have sparked concern over their potential misuse for stalking and privacy violations. In response, manufacturers have implemented safety alerts that notify victims of extended tracking by unknown item trackers. In this study, we investigate the anti-stalking mechanism of Apple's AirTag. We identify and analyze potential triggers of safety alerts that have not been examined in previous research, such as the local time, the victim's device model, the AirTag's battery life, and the distance between the AirTag and the victim's device. We then demonstrate that it is possible to build a stealthy cloned AirTag that tracks victims in real time through the Find My app while circumventing safety alerts on the victim's device. Our experiments show that, despite regular updates to the public key and MAC address, the cloned AirTag can provide real-time location updates even with a four-month-old key, highlighting the challenges in designing a robust anti-stalking framework. Finally, we propose practical solutions to mitigate stalking risks from cloned AirTags and to strengthen the existing anti-stalking safeguards for AirTags; these suggestions aim to provide a foundation for similar Bluetooth-based item trackers to improve their anti-stalking protections while preserving tracking efficiency. Our evaluation also highlights that safety alerts take over 8 hours to appear during the day and arrive more promptly at night, particularly after 11 pm.
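    The clone's resilience to key rotation is easier to follow with one protocol detail documented by public reverse engineering of Find My (e.g., the OpenHaystack project): the advertised BLE address is derived from the current rotating public key, so the two always change together. The sketch below illustrates that derivation as background on the protocol; it is not code from this paper.

    ```python
    def findmy_mac_from_pubkey(pubkey: bytes) -> str:
        """Derive the BLE random static address a Find My advertisement
        uses for a given rotating public key: per public reverse
        engineering, it is the first 6 bytes of the advertised EC public
        key with the top two bits of the first byte forced to 1 (marking
        a BLE random static address)."""
        addr = bytearray(pubkey[:6])
        addr[0] |= 0b11000000
        return ":".join(f"{b:02x}" for b in addr)

    # Hypothetical 28-byte P-224 public key; a real clone would replay
    # keys harvested from a genuine AirTag rather than fabricate them.
    key = bytes(range(1, 29))
    print(findmy_mac_from_pubkey(key))  # c1:02:03:04:05:06
    ```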
  3. Targeted advertising remains an important part of the free web browsing experience: advertisers' targeting and personalization algorithms together find the most relevant audience for millions of ads every day. However, that same reach allows ads to serve as a vehicle for problematic content, such as scams or clickbait. Recent work exploring people's sentiments toward online ads, and the impact of those ads on their online experiences, has found evidence that online ads can indeed be problematic, and that personalization can aid the delivery of such ads even when the advertiser targets with low specificity. In this paper, we study Facebook, one of the internet's largest ad platforms, and investigate key gaps in our understanding of problematic online advertising: (a) What categories of ads do people find problematic? (b) Are there disparities in the distribution of problematic ads to viewers? And if so, (c) who is responsible: advertisers or advertising platforms? To answer these questions, we empirically measure a diverse sample of user experiences with Facebook ads via a 3-month longitudinal panel. We categorize over 32,000 ads collected from this panel (n = 132) and survey participants' sentiments toward their own ads, identifying four categories of problematic ads. Statistically modeling the distribution of problematic ads across demographics, we find that older people and minority groups are especially likely to be shown such ads. Further, given that 22% of problematic ads had no specific targeting from advertisers, we infer that ad delivery algorithms (i.e., the advertising platforms themselves) played a significant role in the biased distribution of these ads.
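    The abstract does not specify its statistical model; a common choice for this kind of question, sketched below on toy data, is a logistic regression of the odds that a shown ad is rated problematic on viewer demographics. The column names, values, and data are illustrative assumptions, not the study's actual schema or results.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical toy observations: one row per (ad, viewer), recording
    # whether the panelist rated the ad problematic and their age group.
    df = pd.DataFrame({
        "problematic": [0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1],
        "age_group": ["18-34"] * 4 + ["35-54"] * 4 + ["55+"] * 4,
    })

    # Logistic regression of problematic-ad exposure with 18-34 as the
    # reference group; a real analysis would model several demographics
    # jointly, this sketch uses age alone for brevity.
    model = smf.logit(
        "problematic ~ C(age_group, Treatment('18-34'))", data=df
    ).fit(disp=False)
    print(model.params)  # positive coefficients = higher odds than 18-34
    ```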
  4. Researchers and journalists have repeatedly shown that algorithms commonly used in domains such as credit, employment, healthcare, or criminal justice can have discriminatory effects. Some organizations have tried to mitigate these effects by simply removing sensitive features from an algorithm's inputs. In this paper, we explore the limits of this approach using a unique opportunity. In 2019, Facebook agreed to settle a lawsuit by removing certain sensitive features from the inputs of an algorithm that identifies users similar to those provided by an advertiser for ad targeting, making both the modified and unmodified versions of the algorithm available to advertisers. We develop methodologies to measure biases along the lines of gender, age, and race in the audiences created by this modified algorithm, relative to the unmodified one. Our results provide experimental proof that merely removing demographic features from a real-world algorithmic system's inputs can fail to prevent biased outputs. As a result, organizations using algorithms to help mediate access to important life opportunities should consider other approaches to mitigating discriminatory effects.
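    The abstract leaves the bias measurement abstract; one minimal way to operationalize it, sketched below, is to compare the demographic composition of the audiences that the two algorithm versions produce from the same source audience. All names, fields, and records here are illustrative assumptions, not the paper's methodology.

    ```python
    def demographic_share(audience, attribute, value):
        """Fraction of audience records whose attribute equals value."""
        return sum(1 for m in audience if m.get(attribute) == value) / len(audience)

    def bias_shift(modified, unmodified, attribute, value):
        """Change in a group's representation between the audiences built
        by the modified (feature-removed) and unmodified algorithms; a
        value near zero means the removal changed little about who is
        selected."""
        return (demographic_share(modified, attribute, value)
                - demographic_share(unmodified, attribute, value))

    # Hypothetical matched records for two audiences generated from the
    # same advertiser-provided source audience; fields are illustrative.
    unmodified = [{"gender": "F"}] * 70 + [{"gender": "M"}] * 30
    modified   = [{"gender": "F"}] * 68 + [{"gender": "M"}] * 32
    print(f"shift in female share: "
          f"{bias_shift(modified, unmodified, 'gender', 'F'):+.2f}")  # -0.02
    ```

    A shift near zero, as in this toy example, would mean the feature removal barely changed whom the algorithm selects, consistent with the paper's finding that removing demographic inputs can fail to prevent biased outputs.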